Results 1 - 11 of 11
1.
Sci Data ; 11(1): 4, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38168517

ABSTRACT

Several Diptera species are known to transmit pathogens of medical and veterinary interest. However, identifying these species using conventional methods can be time-consuming, labor-intensive, or expensive. A computer vision-based system that uses wing interferential patterns (WIPs) to identify these insects could solve this problem. This study introduces a dataset for training and evaluating a recognition system for dipteran insects of medical and veterinary importance based on WIPs. The dataset includes pictures of Culicidae, Calliphoridae, Muscidae, Tabanidae, Ceratopogonidae, and Psychodidae, and is complemented by previously published datasets of Glossinidae and some Culicidae members. The new dataset contains 2,399 pictures of 18 genera, with each genus documented by a variable number of species and annotated as a class. The dataset covers species variation, with some genera having up to 300 samples.
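A dataset organized this way, with each genus annotated as a class, can be summarized with a short script. This is a minimal sketch assuming a directory-per-class layout (one folder per genus); the folder names and file extensions are illustrative assumptions, not details of the published dataset.

```python
from collections import Counter
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".tif", ".tiff"}

def summarize_wip_dataset(root):
    """Count wing photographs per genus, assuming one subdirectory
    per genus (the class label) under `root`."""
    counts = Counter()
    for img in Path(root).glob("*/*"):
        if img.suffix.lower() in IMAGE_EXTS:
            counts[img.parent.name] += 1
    return dict(counts)
```

Such a summary makes the class imbalance explicit (some genera with up to 300 samples, others with far fewer), which matters when splitting the data for training and evaluation.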


Subjects
Ceratopogonidae , Deep Learning , Diptera , Muscidae , Animals , Insects
2.
Sci Rep ; 13(1): 21389, 2023 12 04.
Article in English | MEDLINE | ID: mdl-38049590

ABSTRACT

Sandflies (Diptera; Psychodidae) are medical and veterinary vectors that transmit diverse parasitic, viral, and bacterial pathogens. Their identification has always been challenging, particularly at the specific and sub-specific levels, because it relies on examining minute and mostly internal structures. Here, to circumvent such limitations, we have evaluated the accuracy and reliability of wing interferential patterns (WIPs) generated on the surface of sandfly wings, in conjunction with deep learning (DL) procedures, to assign specimens at various taxonomic levels. Our dataset proves that the method can accurately distinguish sandflies from other dipteran insects at the family, genus, subgenus, and species levels, with an accuracy higher than 77.0% regardless of the taxonomic level challenged. This approach does not require inspection of internal organs, does not rely on identification keys, and can be implemented under field or near-field conditions, showing promise for proactive and passive entomological surveys of sandflies in an era of scarcity of medical entomologists.
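Reporting accuracy "regardless of the taxonomic level challenged" amounts to scoring predictions separately at each rank. A minimal sketch, assuming each specimen is labeled with a (family, genus, subgenus, species) tuple; this tuple layout is an assumed format for illustration, not the paper's data structure.

```python
def per_level_accuracy(y_true, y_pred,
                       levels=("family", "genus", "subgenus", "species")):
    """Top-1 accuracy at each taxonomic rank.

    `y_true` and `y_pred` are equal-length lists of tuples, one entry
    per rank in `levels` (an assumed layout for illustration).
    """
    n = len(y_true)
    acc = {}
    for i, level in enumerate(levels):
        hits = sum(t[i] == p[i] for t, p in zip(y_true, y_pred))
        acc[level] = hits / n
    return acc
```

A prediction that is wrong at the species level can still count as correct at the genus or family level, which is why the per-rank scores are reported separately.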


Subjects
Deep Learning , Phlebotomus , Psychodidae , Animals , Psychodidae/parasitology , Reproducibility of Results , Phlebotomus/parasitology , Entomology
3.
Biomedicines ; 11(10)2023 Sep 30.
Article in English | MEDLINE | ID: mdl-37893062

ABSTRACT

To characterize the growth of brain organoids (BOs), cultures that replicate some early physiological or pathological developments of the human brain, the organoid shape is usually extracted manually. Due to their novelty, only small datasets of these images are available, but segmenting the organoid shape automatically with deep learning (DL) tools requires a larger number of images. Light U-Net segmentation architectures, which reduce the training time while increasing the sensitivity under small input datasets, have recently emerged. We further reduce the U-Net architecture and compare the proposed architecture (MU-Net) with U-Net and UNet-Mini on bright-field images of BOs using several data augmentation strategies. In each case, we perform leave-one-out cross-validation on 40 original and 40 synthesized images with an optimized adversarial autoencoder (AAE) or on 40 transformed images. The best results are achieved with U-Net segmentation trained on optimized augmentation. However, our novel method, MU-Net, is more robust: it achieves nearly as accurate segmentation results regardless of the dataset used for training (various AAEs or a transformation augmentation). In this study, we confirm that small datasets of BOs can be segmented with a light U-Net method almost as accurately as with the original method.
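The leave-one-out protocol used above generalizes to any segmentation model. A minimal NumPy sketch, using the Dice coefficient as the score; `fit_predict` is a hypothetical placeholder standing in for training MU-Net (or any model) on the training split and predicting the held-out mask.

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

def leave_one_out(images, masks, fit_predict):
    """Hold out each sample in turn, train on the rest, and return
    the per-sample Dice scores of the held-out predictions."""
    scores = []
    for i in range(len(images)):
        train = [j for j in range(len(images)) if j != i]
        pred = fit_predict([images[j] for j in train],
                           [masks[j] for j in train],
                           images[i])
        scores.append(dice(pred, masks[i]))
    return scores
```

With only 40 original images, leave-one-out is affordable and uses every image for testing exactly once, which is why it suits such small datasets.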

4.
Sci Rep ; 13(1): 17628, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37848666

ABSTRACT

Hematophagous insects belonging to the Aedes genus are proven vectors of viral and filarial pathogens of medical interest. Aedes albopictus is an increasingly important vector because of its rapid worldwide expansion. In the context of global climate change and the emergence of zoonotic infectious diseases, identification tools with field application are required to strengthen efforts in the entomological survey of arthropods of medical interest. Large-scale and proactive entomological surveys of Aedes mosquitoes require skilled technicians and/or costly technical equipment, further complicated by the vast number of named species. In this study, we developed an automatic classification system for Aedes species by taking advantage of the species-specific marker displayed by wing interferential patterns (WIPs). From a database holding 494 photomicrographs of 24 Aedes spp., the species documented with more than ten pictures were used in a deep learning methodology to train a convolutional neural network and test its accuracy in classifying samples at the genus, subgenus, and species taxonomic levels. We recorded an accuracy of 95% at the genus level and > 85% for two (Ochlerotatus and Stegomyia) of the three subgenera tested. Lastly, eight of the ten Aedes species that underwent training were accurately classified, with an overall accuracy of > 70%. Altogether, these results demonstrate the potential of this methodology for Aedes species identification and represent a tool for the future implementation of large-scale entomological surveys.


Subjects
Aedes , Ochlerotatus , Animals , Mosquito Vectors , Machine Learning , Species Specificity
5.
Sci Rep ; 13(1): 13895, 2023 08 25.
Article in English | MEDLINE | ID: mdl-37626130

ABSTRACT

We present a new and innovative identification method based on deep learning of the wing interferential patterns carried by mosquitoes of the Anopheles genus to classify and assign 20 Anopheles species, including 13 malaria vectors. We provide additional evidence that this approach can identify Anopheles spp. with an accuracy of up to 100% for ten of the 20 species, although accuracy was moderate (> 65%) for three species and weak (50%) for seven. The ability of the process to discriminate cryptic or sibling species was also assessed on three species belonging to the Gambiae complex. Strikingly, An. gambiae, An. arabiensis, and An. coluzzii, morphologically indistinguishable species belonging to the Gambiae complex, were distinguished with 100%, 100%, and 88% accuracy, respectively. This tool would therefore support entomological surveys of malaria vectors and vector control implementation. In the future, we anticipate that our method can be applied to other arthropod vector-borne diseases.
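Per-species figures such as the 100%, 100%, and 88% reported for the Gambiae complex come from scoring each true class separately rather than pooling all samples. A minimal sketch of that bookkeeping; the species labels in the example are illustrative.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Fraction of correctly classified samples for each true class."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return {c: hits[c] / totals[c] for c in totals}
```

Per-class scores expose weak species that an overall accuracy would hide, which is essential when some classes (e.g. sibling species) are much harder than others.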


Subjects
Anopheles , Arthropods , Deep Learning , Animals , Humans , Mosquito Vectors , Siblings
6.
Front Neurosci ; 17: 1220172, 2023.
Article in English | MEDLINE | ID: mdl-37650105

ABSTRACT

Introduction: Datasets containing only a few images are common in the biomedical field. This poses a global challenge for the development of robust deep-learning analysis tools, which require a large number of images. Generative Adversarial Networks (GANs) are an increasingly used solution to expand small datasets, specifically in the biomedical domain. However, the validation of synthetic images by metrics is still controversial, and psychovisual evaluations are time consuming. Methods: We augment a small brain organoid bright-field database of 40 images using several GAN optimizations. We compare these synthetic images to the original dataset using similarity metrics, and we perform a psychovisual evaluation of the 240 images generated. Eight biological experts labeled the full dataset (280 images) as synthetic or natural using custom-built software. We calculate the error rate per loss optimization as well as the hesitation time, and then compare these results to those provided by the similarity metrics. We test the psychovalidated images in the training step of a segmentation task. Results and discussion: Generated images are considered as natural as the original dataset, with no increase in expert hesitation time. Experts are particularly misled by perceptual and Wasserstein loss optimization; these optimizations produce the images that are most similar to the original dataset according to the metrics. We do not observe a strong correlation, but there are links between some metrics and the psychovisual decision depending on the kind of generation. Certain blur-metric combinations might replace the psychovisual evaluation. Segmentation tasks that use the most psychovalidated images are the most accurate.
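The per-optimization error rate and hesitation time described above reduce to simple bookkeeping over the expert judgments. A minimal sketch; the record layout (loss name, ground truth, expert answer, response time) is an assumed format for illustration, not the study's actual data schema.

```python
from collections import defaultdict

def score_expert_judgments(records):
    """records: iterable of (loss_name, is_synthetic, judged_synthetic, seconds).

    Returns {loss_name: (error_rate, mean_hesitation_seconds)}.
    An answer counts as an error when the expert's call disagrees
    with the ground truth for that image.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    time_sum = defaultdict(float)
    for loss, truth, judged, secs in records:
        totals[loss] += 1
        errors[loss] += (truth != judged)
        time_sum[loss] += secs
    return {loss: (errors[loss] / totals[loss], time_sum[loss] / totals[loss])
            for loss in totals}
```

A high error rate on synthetic images means experts mistook them for natural ones, i.e. the corresponding loss optimization produced convincing images.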

7.
Sci Rep ; 12(1): 20086, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36418429

ABSTRACT

A simple method for accurately identifying Glossina spp. in the field is needed to sustain the future elimination of Human African Trypanosomiasis (HAT) as a public health scourge, as well as the sustainable management of African Animal Trypanosomiasis (AAT). Current methods for Glossina species identification rely heavily on a few well-trained experts. Approaches based on molecular methodologies such as DNA barcoding or mass spectrometry protein profiling (MALDI-TOF) have not been thoroughly investigated for Glossina spp. Moreover, because they are destructive, time-consuming, and costly in infrastructure and materials, they may not be well adapted for the survey of arthropod vectors involved in the transmission of pathogens responsible for Neglected Tropical Diseases such as HAT. This study demonstrates a new methodology to classify Glossina species: a database of wing interference patterns (WIPs) representative of the Glossina species involved in the transmission of HAT and AAT, used in conjunction with a deep learning architecture. This database holds 1,766 pictures representing 23 Glossina species. This cost-effective methodology, which requires only mounting wings on slides and a commercially available microscope, demonstrates that WIPs are an excellent medium for automatically recognizing Glossina species with very high accuracy.


Subjects
Trypanosomiasis, African , Tsetse Flies , Animals , Humans , Machine Learning , Databases, Factual , Neglected Diseases , Spectrometry, Mass, Matrix-Assisted Laser Desorption-Ionization
8.
J Imaging ; 7(2)2021 Feb 03.
Article in English | MEDLINE | ID: mdl-34460624

ABSTRACT

Bio-inspired Event-Based (EB) cameras are a promising new technology that outperforms standard frame-based cameras in scenes with extreme lighting and fast motion. A number of EB corner detection techniques have already been developed; however, the performance of these EB corner detectors has only been evaluated against a few author-selected criteria rather than on a unified common basis, as proposed here. Moreover, their experimental conditions are mainly limited to less interesting operational regions of the EB camera (in which frame-based cameras can also operate), and some of the criteria, by definition, could not distinguish whether the detector had any systematic bias. In this paper, we evaluate five of the seven existing EB corner detectors on a public dataset, including extreme illumination conditions that have not been investigated before. This evaluation is the first of its kind both in the number of detectors analysed and in applying a unified procedure to all of them. Contrary to previous assessments, we employed both the intensity and trajectory information within the public dataset rather than only one of them. We show that a rigorous comparison among EB detectors can be performed without tedious manual labelling, even with challenging acquisition conditions. This study thus proposes the first standard unified EB corner evaluation procedure, which will enable better understanding of the underlying mechanisms of EB cameras and can therefore lead to more efficient EB corner detection techniques.
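A unified evaluation of this kind typically matches detected corners to reference positions within a pixel tolerance. A minimal NumPy sketch of such a matching score; the tolerance value and the point format are assumptions for illustration, not the paper's actual protocol.

```python
import numpy as np

def detection_rate(detected, ground_truth, tol=3.0):
    """Fraction of ground-truth corners with at least one detection
    within `tol` pixels. Both inputs are (N, 2) arrays of (x, y)."""
    detected = np.asarray(detected, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    if len(ground_truth) == 0:
        return 1.0
    if len(detected) == 0:
        return 0.0
    # Pairwise distances, shape (num_ground_truth, num_detected).
    d = np.linalg.norm(ground_truth[:, None, :] - detected[None, :, :], axis=-1)
    return float((d.min(axis=1) <= tol).mean())
```

Applying one such criterion uniformly to every detector is what makes the comparison a common basis rather than a per-paper benchmark.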

9.
Front Neurosci ; 15: 629067, 2021.
Article in English | MEDLINE | ID: mdl-34276279

ABSTRACT

Purpose: Since their first generation in 2013, the use of cerebral organoids has spread exponentially. Today, the amount of generated data is becoming challenging to analyze manually. This review aims to survey the current image acquisition methods and subsequently identify the needs in image analysis tools for cerebral organoids. Methods: To address this question, we went through all recent articles published on the subject and annotated the protocols, acquisition methods, and algorithms used. Results: Over the investigated period of time, confocal microscopy and bright-field microscopy were the most used acquisition techniques. Cell counting, the most common task, is performed in 20% of the articles; around 12% of the articles calculate area or other morphological parameters. Image analysis on cerebral organoids is mostly performed with the ImageJ software (around 52% of the articles) and the Matlab language (4%). Treatments remain mostly semi-automatic. We highlight the limitations encountered in image analysis in the cerebral organoid field and suggest possible solutions and implementations to develop. Conclusions: In addition to providing an overview of cerebral organoid cultures and imaging, this work highlights the need to improve the existing image analysis methods for such images and the need for specific analysis tools. These solutions could specifically help to monitor the growth of future standardized cerebral organoids.

10.
Front Neurosci ; 12: 135, 2018.
Article in English | MEDLINE | ID: mdl-29695948

ABSTRACT

This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process color output from the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm leveraging this information. It is designed to be computationally cheap, thus showing how low level processing benefits from asynchronous acquisition and high temporal resolution data. The resulting color segmentation and tracking performance is assessed both with an indoor controlled scene and two outdoor uncontrolled scenes. The tracking's mean error to the ground truth for the objects of the outdoor scenes ranges from two to twenty pixels.

11.
Front Neurosci ; 10: 391, 2016.
Article in English | MEDLINE | ID: mdl-27642275

ABSTRACT

The asynchronous time-based neuromorphic image sensor ATIS is an array of autonomously operating pixels able to encode luminance information with an exceptionally high dynamic range (>143 dB). This paper introduces an event-based methodology to display data from this type of event-based imagers, taking into account the large dynamic range and high temporal accuracy that go beyond available mainstream display technologies. We introduce an event-based tone mapping methodology for asynchronously acquired time encoded gray-level data. A global and a local tone mapping operator are proposed. Both are designed to operate on a stream of incoming events rather than on time frame windows. Experimental results on real outdoor scenes are presented to evaluate the performance of the tone mapping operators in terms of quality, temporal stability, adaptation capability, and computational time.
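As a toy illustration of what a global operator must do, the sketch below compresses high-dynamic-range gray levels into an 8-bit display range using robust percentiles. This is a generic frame-style mapping shown only for intuition; the percentile anchoring is an assumption, and the operators proposed in the paper instead work event by event on the incoming stream.

```python
import numpy as np

def global_tone_map(gray, low_pct=1.0, high_pct=99.0):
    """Map HDR gray levels onto [0, 255] with a single global curve
    anchored on robust percentiles of the data (clipping outliers)."""
    gray = np.asarray(gray, dtype=float)
    lo, hi = np.percentile(gray, [low_pct, high_pct])
    scaled = np.clip((gray - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return np.round(scaled * 255).astype(np.uint8)
```

A local operator would instead adapt the curve per neighborhood, preserving contrast in both bright and dark regions at the cost of temporal stability, which is one of the trade-offs the paper evaluates.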
